Deals Among Rational Agents
Authors: J. Rosenschein and M. Genesereth
Abstract
A formal framework is presented that models communication and promises in multi-agent interactions. This framework generalizes previous work on cooperation without communication, and shows the ability of communication to resolve conflicts among agents having disparate goals. Using a deal-making mechanism, agents are able to coordinate and cooperate more easily than in the communication-free model. In addition, there are certain types of interactions where communication makes possible mutually beneficial activity that is otherwise impossible to coordinate.

§1. Introduction

1.1 The Multi-Agent Paradigm and AI

Research in artificial intelligence has focused for many years on the problem of a single intelligent agent. This agent, usually operating in a relatively static domain, was designed to plan, navigate, or solve problems under certain simplifying assumptions, most notable of which was the absence of other intelligent entities. The presence of multiple agents, however, is an unavoidable condition of the real world. People must plan actions taking into account the potential actions of others, which might be a help or a hindrance to their own activities. In order to reason about others' actions, a person must be able to model their beliefs and desires.

The artificial intelligence community has only lately come to address the problems inherent in multi-agent activity. A community of researchers, working on distributed artificial intelligence (DAI), has arisen. Even as they have begun their work, however, these researchers have added on a new set of simplifying assumptions that severely restrict the applicability of their results.

1.2 Benevolent Agents

Virtually all researchers in DAI have assumed that the agents in their domains have common or non-conflicting goals.
Work has thus proceeded on the question of how these agents can best help one another in carrying out their common tasks [3, 4, 6, 7, 24], or how they can avoid interference while using common resources [10, 11]. Multiple agent interactions are studied so as to gain the benefits of increased system efficiency or increased capabilities. Of course, when there is no conflict, there is no need to study the wide range of interactions that can occur among intelligent agents. All agents are fundamentally assumed to be helping one another, and will trade data and hypotheses as well as carry out tasks that are requested of them. We call this aspect of the paradigm the benevolent agent assumption.

(This research has been supported by DARPA under NAVELEX grant number N00039-83-C-0136.)

1.3 Interactions of a More General Nature

In the real world, agents are not necessarily benevolent in their dealings with one another. Each agent has its own set of desires and goals, and will not necessarily help another agent with information or with actions. Of course, while conflict among agents exists, it is not total. There is often potential for compromise and mutually beneficial activity. Previous work in distributed artificial intelligence, bound to the benevolent agent assumption, has generally been incapable of handling these types of interactions.

Intelligent agents capable of interacting even when their goals are not identical would have many uses. For example, autonomous land vehicles (ALVs), operating in a combat environment, can be expected to encounter both friend and foe. In the latter case there need not be total conflict, and in the former there need not be an identity of interests. Other domains in which general interactions are prevalent are resource allocation and management tasks.
An automated secretary [12], for example, may be required to coordinate a schedule with another automated (or human) secretary, while properly representing the desires of its owner. The ability to negotiate, to compromise and promise, would be desirable in these types of encounters. Finally, even in situations where all agents in theory have a single goal, the complexity of interaction might be better handled by a framework that recognizes and resolves sub-goal conflict in a general manner. For example, robots involved in the construction of a space station are fundamentally motivated by the same goal; in the course of construction, however, there may be many minor conflicts caused by occurrences that cannot fully be predicted (e.g., fuel running low, drifting of objects in space). The building agents, each with a different task, could then negotiate with one another and resolve conflict.

(Ninth International Joint Conference on Artificial Intelligence, August 1985. J. Rosenschein and M. Genesereth.)

1.4 Game Theory's Model and Extensions

In modeling the interaction of agents with potentially diverse goals, we borrow the simple construct of game theory, the payoff matrix. Consider the following matrix:

[payoff matrix figure not reproduced in this copy]

The first player is assumed to choose one of the two rows, while the second simultaneously picks one of the two columns. The row-column outcome determines the payoff to each; for example, if the first player picks row b and the second player picks column c, the first player receives a payoff of 2 while the second receives a payoff of 5. If the choice results in an identical payoff for both players, a single number appears in the square (e.g., the a\d payoff above is 2 for both players). Payoffs designate utility to the players of a particular joint move [18]. Game theory addresses the issues of what moves a rational agent will make, given that other agents are also rational.
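The payoff-matrix construct described above can be sketched in code. Only two cells are actually given in the text (row b with column c yields 2 and 5; row a with column d yields 2 for both players); the remaining entries below are hypothetical placeholders, since the original matrix figure is not reproduced here.

```python
# Sketch of the 2x2 payoff matrix described in the text.
# Entries marked "hypothetical" are NOT from the paper; only the
# (b, c) and (a, d) cells are stated in the surrounding prose.
payoffs = {
    ("a", "c"): (3, 3),  # hypothetical placeholder
    ("a", "d"): (2, 2),  # stated: identical payoff of 2 for both players
    ("b", "c"): (2, 5),  # stated: row player gets 2, column player gets 5
    ("b", "d"): (1, 1),  # hypothetical placeholder
}

def outcome(row_move, col_move):
    """Return the (row player, column player) payoff pair for a joint move."""
    return payoffs[(row_move, col_move)]

print(outcome("b", "c"))  # (2, 5)
```

Each agent picks its move simultaneously; the pair of choices indexes a single cell, whose entries are the utilities the joint move yields to the two players.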
We wish to remove the a priori assumption that other agents will necessarily be rational, while at the same time formalizing the concept of rationality in various ways. Our model in this paper allows communication among the agents in the interaction, and allows them to make binding promises to one another. The agents are assumed to be making their decisions based only on the current encounter (e.g., they won't intentionally choose a lower utility in the hope of gaining more utility later on). The formalism handles the case of agents with disparate goals as well as the case of agents with common goals.
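The deal-making mechanism itself is formalized later in the paper; as a minimal illustrative sketch (not the paper's formalism), the core idea is that a single-encounter rational agent accepts a binding deal only when it does at least as well as its fallback utility, and a deal binds only when every agent accepts:

```python
def accepts(deal_utility, fallback_utility):
    # A rational agent deciding on the current encounter alone
    # (no future considerations) accepts a binding deal only if
    # it is at least as good as its expected outcome without it.
    return deal_utility >= fallback_utility

def deal_struck(deal_utilities, fallback_utilities):
    # A binding deal goes through only if every agent rationally accepts.
    return all(accepts(d, f)
               for d, f in zip(deal_utilities, fallback_utilities))

# Hypothetical numbers: both agents gain relative to fallback -> deal holds;
# if either agent would do worse, the deal fails.
print(deal_struck([4, 5], [2, 2]))  # True
print(deal_struck([1, 5], [2, 2]))  # False
```

This is only the acceptance criterion; the paper's contribution lies in how communication and binding promises let agents reach such mutually beneficial joint moves that are uncoordinatable without communication.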